Search for: All records

Creators/Authors contains: "Plumley, Robert D"


  1. Undergraduates enrolled in large, active learning courses must self-regulate their learning (self-regulated learning [SRL]) by appraising tasks, making plans, setting goals, and enacting and monitoring strategies. SRL researchers have relied on self-report and learner-mediated methods during academic tasks studied in laboratories and now collect digital event data when learners engage with technology-based tools in classrooms. Inferring SRL processes from digital events and testing their validity is challenging. We aligned digital and verbal SRL event data to validate digital events as traces of SRL and used them to predict achievement in lab and course settings. In Study 1, we sampled a learning task from a biology course into a laboratory setting. Enrolled students (N = 48) completed the lesson using digital resources (e.g., online textbook, course site) while thinking aloud, weeks before it was taught in class. Analyses confirmed that 10 digital events reliably co-occurred ≥70% of the time with verbalized task definition and strategy use macroprocesses; some digital events co-occurred with multiple verbalized SRL macroprocesses. Variance in the occurrence of validated digital events was limited in lab sessions, and they explained statistically nonsignificant variance in learners’ performance on lesson quizzes. In Study 2, lesson-specific digital event data from learners (N = 307) enrolled in the course (but not in Study 1) predicted performance on lesson-specific exam items, final exams, and course grades. Validated digital events also predicted final exam and course grades in the next semester (N = 432). Digital events can be validated to reflect SRL processes and scaled to explain achievement in naturalistic undergraduate education settings. (A simplified sketch of this kind of co-occurrence check appears after this list.)
     Educational Impact and Implications Statement: Instructors often have difficulty identifying and helping struggling students in courses with hundreds of students. Digital trace data can be used to efficiently and effectively identify struggling students in these large courses, but such data are often difficult to interpret with confidence. In our study, we found that using verbal trace data to augment and validate our inferences about the meaning of digital trace data resulted in a powerful set of predictors of students’ achievement. These validated digital trace data can be used not only to identify students in need of support in large classes, but also to understand how to target interventions to the aspects of learning that are causing students the most difficulty.
    Free, publicly-accessible full text available February 1, 2026
  2. Undergraduate science, technology, engineering, and mathematics (STEM) students’ motivations have a strong influence on whether and how they will persist through challenging coursework and into STEM careers. Proper conceptualization and measurement of motivation constructs, such as students’ expectancies and perceptions of value and cost (i.e., expectancy value theory [EVT]) and their goals (i.e., achievement goal theory [AGT]), are necessary to understand and enhance STEM persistence and success. Research findings suggest the importance of exploring multiple measurement models for motivation constructs, including traditional confirmatory factor analysis, exploratory structural equation models (ESEM), and bifactor models, but more research is needed to determine whether the same model fits best across time and context. As such, we measured undergraduate biology students’ EVT and AGT motivations and investigated which measurement model best fit the data, and whether measurement invariance held, across three semesters. Having determined the best-fitting measurement model and type of invariance, we used scores from the best-performing model to predict biology achievement. Measurement results indicated a bifactor-ESEM model had the best data-model fit for EVT and an ESEM model had the best data-model fit for AGT, with evidence of measurement invariance across semesters. Motivation factors, in particular attainment value and subjective task value, predicted small yet statistically significant amounts of variance in biology course outcomes each semester. Our findings provide support for using modern measurement models to capture students’ STEM motivations and potentially refine conceptualizations of them. Such future research will enhance educators’ ability to benevolently monitor and support students’ motivation, and enhance STEM performance and career success. (The generic bifactor measurement model referenced here is written out after this list.)
  3. Using traces of behaviors to predict outcomes is useful in varied contexts ranging from buyer behaviors to behaviors collected from smart-home devices. Increasingly, higher education systems have been using Learning Management System (LMS) digital data to capture and understand students’ learning and well-being. Researchers in the social sciences are increasingly interested in the potential of using digital log data to predict outcomes and design interventions. Using LMS data for predicting the likelihood of students’ success in for-credit college courses provides a useful example of how social scientists can use these techniques on a variety of data types. Here, we provide a primer on how LMS data can be feature-mapped and analyzed to accomplish these goals. We begin with a literature review summarizing current approaches to analyzing LMS data, then discuss ethical issues of privacy when using demographic data and equitable model building. In the second part of the paper, we provide an overview of popular machine learning algorithms and review analytic considerations such as feature generation, assessment of model performance, and sampling techniques. Finally, we conclude with an empirical example demonstrating the ability of LMS data to predict student success, summarizing important features and assessing model performance across different model specifications. (A minimal feature-generation and model-evaluation sketch in this spirit appears after this list.)
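
The validation step in record 1 turns on a co-occurrence check: for each type of digital event, compute how often its occurrences fall inside a think-aloud interval coded with a given SRL macroprocess, and treat event types meeting the 70% criterion as validated traces of that process. The sketch below is a minimal illustration under assumed data structures; the event names, interval codes, and record layout are hypothetical and are not taken from the study's materials.

    # Illustrative sketch (hypothetical data): co-occurrence of digital events
    # with verbalized SRL macroprocesses, using the 70% criterion from record 1.
    from collections import defaultdict

    # Digital events: (timestamp in seconds, event type); names are made up.
    events = [(12, "open_textbook"), (95, "search_course_site"), (130, "open_textbook")]

    # Coded think-aloud intervals: (start, end, verbalized macroprocess).
    verbal_intervals = [(0, 60, "task_definition"), (60, 140, "strategy_use")]

    def co_occurrence_rates(events, intervals):
        """Share of each event type's occurrences that fall inside an interval
        coded with a given macroprocess, keyed by (event type, macroprocess)."""
        totals = defaultdict(int)
        hits = defaultdict(int)
        for t, etype in events:
            totals[etype] += 1
            for start, end, process in intervals:
                if start <= t < end:
                    hits[(etype, process)] += 1
        return {key: count / totals[key[0]] for key, count in hits.items()}

    rates = co_occurrence_rates(events, verbal_intervals)
    validated = {key for key, rate in rates.items() if rate >= 0.70}  # 70% criterion
    print(rates)
    print(validated)

In a real analysis each event type's rate would be computed over many learners and sessions before the threshold is applied.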
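
Record 2 compares confirmatory factor analysis, ESEM, and bifactor-ESEM specifications. For orientation only, a standard way to write the bifactor measurement model (our notation, not the paper's) is:

    y_{ij} = \nu_j + \lambda_{Gj}\, G_i + \sum_{s=1}^{S} \lambda_{sj}\, F_{si} + \varepsilon_{ij},
    \qquad \operatorname{Cov}(G_i, F_{si}) = 0

where y_{ij} is student i's response to item j, G_i is a general factor, and F_{si} are specific factors (e.g., particular value, cost, or goal-orientation constructs). A conventional bifactor CFA fixes each item's loadings on all non-target specific factors to zero, whereas the bifactor-ESEM variant estimates those cross-loadings freely, which is one reason it can fit motivation survey data better.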
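
Record 3 is a primer whose workflow (feature generation from LMS event logs, model fitting, and performance assessment) can be illustrated with a short sketch. The log schema, event names, and the choice of a logistic-regression pipeline scored with ROC AUC are assumptions for illustration, not the paper's specification.

    # Illustrative sketch (hypothetical schema): per-student features from an LMS
    # event log, then a simple success classifier with a held-out evaluation.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical event log: one row per click in the LMS.
    log = pd.DataFrame({
        "student_id": [1, 1, 1, 2, 2, 3, 3, 4, 5, 5, 6],
        "event_type": ["page_view", "quiz_submit", "forum_post", "page_view", "page_view",
                       "page_view", "quiz_submit", "page_view", "page_view", "forum_post",
                       "page_view"],
        "timestamp": pd.to_datetime([
            "2024-01-10", "2024-01-12", "2024-01-20", "2024-01-10", "2024-01-15",
            "2024-01-11", "2024-01-18", "2024-02-01", "2024-01-09", "2024-01-25",
            "2024-01-30"]),
    })
    outcomes = pd.DataFrame({"student_id": [1, 2, 3, 4, 5, 6],
                             "passed": [1, 1, 1, 0, 1, 0]})

    # Feature generation: event counts per type plus number of distinct active days.
    features = log.pivot_table(index="student_id", columns="event_type",
                               values="timestamp", aggfunc="count", fill_value=0)
    features["active_days"] = log.groupby("student_id")["timestamp"].apply(
        lambda s: s.dt.date.nunique())
    data = features.reset_index().merge(outcomes, on="student_id")

    X = data.drop(columns=["student_id", "passed"])
    y = data["passed"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, stratify=y, random_state=0)

    # Model fitting and assessment; real analyses would also compare algorithms,
    # cross-validate, and check performance across student subgroups.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))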